
    A note on p-values interpreted as plausibilities

    P-values are a mainstay of statistics but are often misinterpreted. We propose a new interpretation of the p-value as a meaningful plausibility, to be interpreted formally within the inferential model framework. We show that, for most practical hypothesis testing problems, there exists an inferential model such that the corresponding plausibility function, evaluated at the null hypothesis, is exactly the p-value. The advantages of this representation are that the notion of plausibility is consistent with the way practitioners use and interpret p-values, and that the plausibility calculation avoids the troublesome conditioning on the truth of the null. This connection with plausibilities also reveals a shortcoming of standard p-values in problems with non-trivial parameter constraints.
    Comment: 13 pages, 1 figure
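    As a minimal sketch of the idea (an illustrative special case, not the paper's general construction): for a two-sided test of H0: theta = theta0 based on a single observation X ~ N(theta, 1), the plausibility function of the basic inferential model, evaluated at theta0, coincides with the familiar two-sided p-value.

```python
import math

def normal_cdf(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def plausibility(theta0, x):
    """Plausibility of H0: theta = theta0, given X = x, under X ~ N(theta, 1).

    For this default inferential model the plausibility equals the
    two-sided p-value 2 * (1 - Phi(|x - theta0|)).
    """
    return 2.0 * (1.0 - normal_cdf(abs(x - theta0)))

# At x = 1.96 the plausibility of theta0 = 0 is about 0.05,
# matching the usual two-sided z-test p-value.
print(plausibility(0.0, 1.96))
```

    Note that the plausibility is read directly as evidence about theta0, with no conditioning on the null being true.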

    Parameter Expansion and Efficient Inference

    This review article on the EM algorithm focuses on parameter expansion, a simple technique introduced with the PX-EM algorithm to make EM converge faster while maintaining its simplicity and stability. The primary objective is the connection between parameter expansion and efficient inference. The article reviews the statistical interpretation of the PX-EM algorithm in terms of efficient inference via bias reduction, and further unfolds the PX-EM mystery by looking at PX-EM from different perspectives. In addition, it briefly discusses potential applications of parameter expansion to statistical inference and the broader impact of statistical thinking on understanding and developing other iterative optimization algorithms.
    Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/10-STS348
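    A hedged sketch of the flavor of the technique, using the classic t-model example from the PX-EM literature (illustrative, not code from the article): for i.i.d. t_nu data with known nu, plain EM updates the scale by dividing the weighted sum of squares by n, while the reduced PX-EM update divides by the sum of the E-step weights. Both iterations share the maximum likelihood estimate as their fixed point, but the expanded version typically converges faster.

```python
import random

def fit_t_em(x, nu=4.0, iters=300, px=False):
    """Fit location mu and scale s2 of a t_nu model by EM.

    E-step: weights w_i = (nu + 1) / (nu + (x_i - mu)^2 / s2).
    M-step: mu is the weighted mean of the data; s2 divides the
    weighted sum of squares by n (plain EM) or by sum(w)
    (the reduced PX-EM update).
    """
    mu, s2 = 0.0, 1.0
    for _ in range(iters):
        w = [(nu + 1.0) / (nu + (xi - mu) ** 2 / s2) for xi in x]
        sw = sum(w)
        mu = sum(wi * xi for wi, xi in zip(w, x)) / sw
        ss = sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, x))
        s2 = ss / sw if px else ss / len(x)
    return mu, s2

random.seed(1)
data = [2.0 + random.gauss(0.0, 1.0) for _ in range(200)]
# Both variants reach the same MLE of (mu, s2)
print(fit_t_em(data), fit_t_em(data, px=True))
```

    The only change between the two variants is the divisor in the scale update; that small modification is what the working-parameter expansion buys.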

    On Exact and Efficient Inference for Many Normal Means

    Inference about the unknown means $\theta = (\theta_1, \dots, \theta_n)' \in \mathbb{R}^n$ in the sampling model $X \sim N_n(\theta, I)$ from the observed $X$, known as the many-normal-means problem, has proven fundamental and yet challenging, inasmuch as a satisfactory understanding remains elusive. To tackle exact and efficient inference about $\theta$, this paper proposes innovative formulations of Inferential Models for two kinds of this problem: the {\it classic} kind, given as is, and the {\it empirical-Bayes} kind, where the $\theta_i$'s are further assumed to be an unobservable sample from an unknown non-parametric distribution $G(\cdot)$. The formulation for the empirical-Bayes kind, via numerical deconvolution, allows for prior-free probabilistic inference with over-parameterization of the non-parametric model $G(\cdot)$, whereas the formulation for the classic kind utilizes a latent random permutation and as a result provides sound reasoning with uncertainty toward a deeper understanding. For uncertainty quantification within the more familiar frequentist framework, the method of maximum plausibility estimation is used for point estimation. Exact but conservative interval estimation is obtained from the plausibility function, with a Monte Carlo-based adaptive-adjustment approach to constructing shorter confidence intervals with targeted coverage. These methods are illustrated via simulation studies and a real-data example. The numerical results show that, for interval estimation, the adaptive intervals are satisfactory in both coverage and efficiency, and that, for point estimation, the proposed methods outperform the traditional James-Stein estimator and Efron's g-modeling in terms of mean squared error. The paper concludes with a few remarks, including future developments and extensions of the proposed methods.
    Comment: 20 pages
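    For context on the classical baseline the paper compares against, here is a hedged simulation sketch (the simulation settings are invented for illustration) of the James-Stein estimator $\hat{\theta}_{JS} = (1 - (n-2)/\|X\|^2)\,X$ dominating the MLE $\hat{\theta} = X$ under squared-error loss when $n \ge 3$:

```python
import random

def james_stein(x):
    """Shrink x = (x_1, ..., x_n) toward 0: (1 - (n - 2) / ||x||^2) * x."""
    n = len(x)
    norm2 = sum(v * v for v in x)
    c = 1.0 - (n - 2) / norm2
    return [c * v for v in x]

def risks(theta, n_reps=500, seed=0):
    """Monte Carlo squared-error risk of the MLE X and of James-Stein."""
    rng = random.Random(seed)
    tot_mle, tot_js = 0.0, 0.0
    for _ in range(n_reps):
        x = [t + rng.gauss(0.0, 1.0) for t in theta]
        js = james_stein(x)
        tot_mle += sum((xi - t) ** 2 for xi, t in zip(x, theta))
        tot_js += sum((ji - t) ** 2 for ji, t in zip(js, theta))
    return tot_mle / n_reps, tot_js / n_reps

# With theta_i = 1 for n = 20 means, the MLE's risk is about n = 20,
# while the James-Stein risk is strictly smaller.
risk_mle, risk_js = risks([1.0] * 20)
print(risk_mle, risk_js)
```

    The shrinkage factor can go negative when $\|X\|^2 < n - 2$; the positive-part variant truncates it at zero, and the estimators proposed in the paper are compared against this James-Stein baseline in mean squared error.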